3 research outputs found

    Optimal column layout for hybrid workloads

    Get PDF
Data-intensive analytical applications need to support both efficient reads and writes. However, a data layout that works well for an update-heavy workload is usually not well-suited for a read-mostly one, and vice versa. Modern analytical data systems rely on columnar layouts and employ delta stores to inject new data and updates. We show that for hybrid workloads we can achieve close to one order of magnitude better performance by tailoring the column layout design to the data and query workload. Our approach navigates the possible design space of the physical layout: it organizes each column’s data by determining the number of partitions, their corresponding sizes and ranges, and the amount of buffer space and how it is allocated. We frame these design decisions as an optimization problem that, given workload knowledge and performance requirements, provides an optimal physical layout for the workload at hand. To evaluate this work, we build an in-memory storage engine, Casper, and we show that it outperforms state-of-the-art data layouts of analytical systems for hybrid workloads. Casper delivers up to 2.32x higher throughput for update-intensive workloads and up to 2.14x higher throughput for hybrid workloads. We further show how to make data layout decisions robust to workload variation by carefully selecting the input of the optimization.
    http://www.vldb.org/pvldb/vol12/p2393-athanassoulis.pdf
    Published version
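    The abstract frames layout design as an optimization over partition counts, sizes, ranges, and buffer allocation, but the model itself is not spelled out here. The Python sketch below is a rough illustration only: it picks a partition count for a single column by minimizing a toy workload-weighted cost; the cost formulas, constants, and workload mix are assumptions made for illustration and are not Casper's actual model.

```python
# Toy illustration: choose a partition count for one column by minimizing a
# simple workload-weighted cost. All formulas and constants below are
# assumptions for illustration only, not the cost model used by Casper.

N = 100_000_000          # rows in the column (assumed)
workload = {             # assumed relative operation frequencies
    "point_read": 0.3,
    "range_scan": 0.2,
    "update": 0.5,
}

def cost(num_partitions: int) -> float:
    part_size = N / num_partitions
    # Point reads: locate one partition, then scan it.
    point = workload["point_read"] * part_size
    # Range scans: touch a few partitions; finer partitioning prunes more data.
    scan = workload["range_scan"] * 3 * part_size
    # Updates: cheaper within small partitions, but more partitions add
    # per-partition bookkeeping overhead.
    update = workload["update"] * (0.1 * part_size + 50 * num_partitions)
    return point + scan + update

best = min(range(1, 20_001), key=cost)
print(f"best partition count under this toy model: {best}")
```

    The formulation described in the paper additionally chooses partition ranges and how buffer space is allocated, jointly and under performance requirements; the sketch only conveys the shape of the read/write trade-off.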

    The need for a new I/O model

    Full text link
    http://cidrdb.org/cidr2021/papers/cidr2021_abstract04.pdf
    Published version

    Solid-State Storage and Work Sharing for Efficient Scaleup Data Analytics

    No full text
    Today, managing, storing, and analyzing data continuously in order to gain additional insight is becoming commonplace. Data analytics engines have traditionally been optimized for read-only queries, assuming that the main data reside on mechanical disks. The need for 24x7 operations in global markets and the rise of online and other quickly-reacting businesses make data freshness an additional design goal. Moreover, the increased requirements on information quality make semantic databases (often represented as graphs using the RDF data representation model) a key component. Last but not least, the performance requirements combined with the increasing amount of stored and managed data call for high-performance yet space-efficient access methods in order to support the desired concurrency and throughput. Innovative data management algorithms and careful use of the underlying hardware platform help us to address the aforementioned requirements. The volume of generated, stored, and queried data is increasing exponentially, and new workloads often consist of time-generated data. At the same time, the hardware is evolving, with dramatic changes both in processing units and in storage devices, where solid-state storage is becoming ubiquitous. In this thesis, we build workload-aware data access methods for data analytics - tailored for emerging time-generated workloads - which use solid-state storage either (i) as an additional level in the memory hierarchy to enable real-time updates in a data analytics engine, or (ii) as standalone storage for applications involving support for knowledge-based data and support for efficiently indexing archival and time-generated data. Building workload-aware and hardware-aware data management systems allows us to increase their performance and augment their functionality. The advancements in storage have led to a variety of storage devices with different characteristics (e.g., monetary cost, access times, durability, endurance, read performance vs. write performance), and the suitability of a method for an application depends on how it balances the different characteristics of the storage medium it uses. The data access methods proposed in this thesis - MaSM and BF-Tree - balance the benefits of solid-state storage and of traditional hard disks, and are suitable for time-generated data or datasets with similar organization, which include social, monitoring, and archival applications. The study of work sharing in the context of data analytics paves the way to integrating shared database operators, starting with shared scans, into several data analytics engines, and the workload-aware physical data organization proposed for knowledge-based datasets - RDF-tuple - enables the integration of diverse data sources into the same system.
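    The thesis's use of solid-state storage as an additional level in the memory hierarchy (MaSM) is only summarized above. As a generic sketch of the underlying idea (buffering recent updates sorted on fast storage and merging them with the sorted main data at read time, so that queries see fresh data without rewriting the main data on every write), the Python snippet below merges the two sorted streams and lets buffered updates override older main-data entries. The data layout and names are hypothetical; this is not the MaSM algorithm itself.

```python
import heapq

# Generic sketch of a delta-store read path: main data stays sorted on slow
# storage, recent updates are buffered (e.g., on SSD) sorted by key, and a
# scan merges both streams so readers always see fresh data. This shows the
# general idea only; it is not the MaSM algorithm from the thesis.

main_data = [(1, "a"), (3, "c"), (5, "e")]        # sorted (key, value) pairs
update_buffer = [(2, "b"), (3, "c2"), (6, "f")]   # sorted buffered updates

def merged_scan(main, updates):
    """Yield (key, value) pairs, letting buffered updates override main data."""
    # The 0/1 flag sorts an update after a main entry with the same key, so
    # the update is the last version seen for that key and is the one kept.
    merged = heapq.merge(
        ((k, 0, v) for k, v in main),
        ((k, 1, v) for k, v in updates),
    )
    last_key = object()   # sentinel that compares unequal to every real key
    latest = None
    for key, _, value in merged:
        if key != last_key and latest is not None:
            yield latest
        last_key, latest = key, (key, value)
    if latest is not None:
        yield latest

print(list(merged_scan(main_data, update_buffer)))
# -> [(1, 'a'), (2, 'b'), (3, 'c2'), (5, 'e'), (6, 'f')]
```

    A real engine would stream the main data from disk and the buffer from solid-state storage rather than hold in-memory lists, and would merge many sorted runs; the point here is only how fresh updates can be injected into a read path cheaply.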